Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.


Why We're Here

In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Write your Algorithm
  • Step 6: Test Your Algorithm

Step 0: Import Datasets

Make sure that you've downloaded the required human and dog datasets:

  • Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.

  • Download the human dataset. Unzip the folder and place it in the home diretcory, at location /lfw.

Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.

In [6]:
import numpy as np
from glob import glob

# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
example_files = np.array(glob("example_dataset/*"))

# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
print('There are %d total example images.' % len(example_files))
There are 13233 total human images.
There are 8351 total dog images.
There are 15 total example images.

Step 1: Detect Humans

In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [7]:
import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [8]:
# display faces detected, placing rectangle around each human face within the image
def display_faces_with_rectangle_around_face_cv(full_img, faces_in_images, rect_color=(0,255,0)):  
    '''
    Displays a rectangle around the detected faces
    
    Args:
        img: an image
        faces_in_images: list of face locations within the given 
                         image with the following tuple (x, y, width, and height) 
                         of each face 
        rect_color: color of rectangle in rgb tuple 
                    Defaults to (0, 255, 0) Green
    Returns:
        None
    '''
    
    for (x,y,w,h) in faces_in_images:
        
        # add bounding box to color image
        cv2.rectangle(full_img, (x,y), (x+w,y+h), rect_color, 2) # takes in  
    
    # convert BGR image to RGB for plotting
    cv_rgb = cv2.cvtColor(full_img, cv2.COLOR_BGR2RGB)
    
    # display the image, along with bounding box
    plt.imshow(cv_rgb)
    plt.show()

    
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path, display=False):
    '''
    Detects a human face
    Args:
        img_path: a path to an image
        display: display image with rectangles around detected faces if True (bool)
        
    Returns:
        if there is a human face in the image (bool)
    '''
    # Reads in image
    img = cv2.imread(img_path)
    # converts image to Gray Scale from OpenCV's Default BGR image object
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
    # detects human faces
    faces = face_cascade.detectMultiScale(gray)

    # display faces 
    if display:
        display_faces_with_rectangle_around_face_cv(img, faces)
    return len(faces) > 0
In [16]:
example_count = 10

print("Image Display Test:")
results = face_detector(example_files[0], display=True)
print(results)
print()
print("Example Dataset:")
for example_img_path in example_files:
    results = face_detector(example_img_path)
    print(results)

print()
print("Human Dataset:")
for human_img_path in human_files[:example_count]:
    results = face_detector(human_img_path)
    print(results)

print()
print("Dog Dataset:")
for dog_img_path in dog_files[:example_count]:
    results = face_detector(dog_img_path)
    if results:
        print("{}\tUse the commented function 'display_faces_with_rectangle_around_face' to further understand results.\n\tFound in 'face_detection declaration.'".format(results))
    else:
        print(results)
Image Display Test:
True

Example Dataset:
True
False
True
False
False
False
True
True
False
False
True
True
False
False
False

Human Dataset:
True
True
True
True
True
True
True
True
True
True

Dog Dataset:
False
False
True	Use the commented function 'display_faces_with_rectangle_around_face' to further understand results.
	Found in 'face_detection declaration.'
False
False
False
False
False
False
False

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer: (You can print out your results and/or write your percentages in this cell)

100%|██████████| 100/100 [00:01<00:00, 59.84it/s]
Percent of faces detected in a sample of the Human Dataset: 99.0% - time: 0:00:01.651157
100%|██████████| 100/100 [00:07<00:00, 13.18it/s]
Percent of faces detected in a sample of the Dog Dataset:   18.0% - time: 0:00:07.588281
In [17]:
from tqdm import tqdm
from datetime import datetime

human_files_short = human_files[:100]
dog_files_short = dog_files[:100]

#-#-# Do NOT modify the code above this line. #-#-#
human_faces_detected_in_human_files = 0
human_faces_detected_in_dog_files = 0

startTime = datetime.now()
for image_path in tqdm(human_files_short):
    if face_detector(image_path):
        human_faces_detected_in_human_files += 1
percent_of_human_faces_detected_in_human_files = human_faces_detected_in_human_files/len(human_files_short)*100
print("Percent of faces detected in a sample of the Human Dataset:\t{}% - time: {}".format(percent_of_human_faces_detected_in_human_files, datetime.now() - startTime))

startTime = datetime.now()
for image_path in tqdm(dog_files_short):
    if face_detector(image_path):
        human_faces_detected_in_dog_files += 1
percent_of_human_faces_detected_in_dog_files = human_faces_detected_in_dog_files/len(dog_files_short)*100
print("Percent of faces detected in a sample of the Dog Dataset:\t{}% - time: {}".format(percent_of_human_faces_detected_in_dog_files, datetime.now() - startTime))
100%|██████████| 100/100 [00:01<00:00, 62.89it/s]
  2%|▏         | 2/100 [00:00<00:07, 13.99it/s]
Percent of faces detected in a sample of the Human Dataset:	99.0% - time: 0:00:01.592262
100%|██████████| 100/100 [00:07<00:00, 12.74it/s]
Percent of faces detected in a sample of the Dog Dataset:	18.0% - time: 0:00:07.849297

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [14]:
### (Optional) 
### TODO: Test performance of anotherface detection algorithm.
### Feel free to use as many code cells as needed.
import face_recognition
def display_faces_with_rectangle_around_face_fr(img, faces_in_images, rect_color=(0,255,0)):  
    '''
    Displays a rectangle around the detected faces
    
    Args:
        img: an image
        faces_in_images: list of face locations within the given 
                         image with the following tuple (x, y, width, and height) 
                         of each face 
        rect_color: color of rectangle in rgb tuple 
                    Defaults to (0, 255, 0) Green
    Returns:
        None
    '''
    for (x,y,w,h) in faces_in_images:
        # add bounding box to color image
        cv2.rectangle(img, (y, x), (h, w), rect_color, 2)
  
    # display the image, along with bounding box
    plt.imshow(img)
    plt.show()
    
    
# returns "True" if face is detected in image stored at img_path
def face_detector_fr(img_path, display=False):        
    '''
    Detects a human face
    Args:
        img_path: a path to an image
        display: display image with rectangles around detected faces if True (bool)

    Returns:
        if there is a human face in the image (bool)
    '''
    # Reads in image
    img = face_recognition.load_image_file(img_path)

    # detects human faces
    faces = face_recognition.face_locations(img)

    # display faces
    if display:
        display_faces_with_rectangle_around_face_fr(img, faces)
    return len(faces) > 0
Image Display Test:
True

Example Dataset:
True
False
True
False
False
False
False
True
False
False
True
True
False
False
False

Human Dataset:
True
True
True
True
True
True
True
True
True
True

Dog Dataset:
False
False
True	Use the commented function 'display_faces_with_rectangle_around_face' to further understand results.
	Found in 'face_detection declaration.'
False
False
False
False
False
False
False
In [18]:
example_count = 10

print("Image Display Test:")
results = face_detector_fr(example_files[0], display=True)
print(results)
print()
print("Example Dataset:")
for human_img_path in example_files:
    results = face_detector_fr(human_img_path)
    print(results)

print()
print("Human Dataset:")
for human_img_path in human_files[:example_count]:
    results = face_detector_fr(human_img_path)
    print(results)

print()
print("Dog Dataset:")
for dog_img_path in dog_files[:example_count]:
    results = face_detector_fr(dog_img_path)
    if results:
        print("{}\tUse the commented function 'display_faces_with_rectangle_around_face' to further understand results.\n\tFound in 'face_detection declaration.'".format(results))
    else:
        print(results)
Image Display Test:
True

Example Dataset:
True
False
True
False
False
False
False
True
False
False
True
True
False
False
False

Human Dataset:
True
True
True
True
True
True
True
True
True
True

Dog Dataset:
False
False
True	Use the commented function 'display_faces_with_rectangle_around_face' to further understand results.
	Found in 'face_detection declaration.'
False
False
False
False
False
False
False
In [19]:
human_faces_detected_in_human_files = 0
human_faces_detected_in_dog_files = 0

startTime = datetime.now()
for image_path in tqdm(human_files_short):
    if face_detector_fr(image_path):
        human_faces_detected_in_human_files += 1
percent_of_human_faces_detected_in_human_files = human_faces_detected_in_human_files/len(human_files_short)*100
print("Percent of faces detected in a sample of the Human Dataset:\t{}% - time: {}".format(percent_of_human_faces_detected_in_human_files, datetime.now() - startTime))

startTime = datetime.now()
for image_path in tqdm(dog_files_short):
    if face_detector_fr(image_path):
        human_faces_detected_in_dog_files += 1

percent_of_human_faces_detected_in_dog_files = human_faces_detected_in_dog_files/len(dog_files_short)*100
print("Percent of faces detected in a sample of the Dog Dataset:\t{}%% - time: {}".format(percent_of_human_faces_detected_in_dog_files, datetime.now() - startTime))
100%|██████████| 100/100 [00:03<00:00, 28.63it/s]
  1%|          | 1/100 [00:00<00:15,  6.25it/s]
Percent of faces detected in a sample of the Human Dataset:	99.0% - time: 0:00:03.495616
100%|██████████| 100/100 [00:19<00:00,  1.10it/s]
Percent of faces detected in a sample of the Dog Dataset:	10.0%% - time: 0:00:19.458379


Step 2: Detect Dogs

In this section, we use a pre-trained model to detect dogs in images.

Obtain Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.

In [20]:
import torch
import torchvision.models as models

# define VGG16 model
VGG16 = models.vgg16(pretrained=True)

# check if CUDA is available
use_cuda = torch.cuda.is_available()

# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()

Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.

(IMPLEMENTATION) Making Predictions with a Pre-trained Model

In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.

Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.

In [35]:
from PIL import Image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

import torchvision.transforms as transforms

# Load and pre-process an image from the given img_path
def img_path_to_tensor(img_path):
    '''
    Opens an image file from its path and transforms with 
    resize and to tensor
    
    Args:
        img_path: path to an image
        
    Returns:
        A transformed image tensor
    '''
    # open image
    img = Image.open(img_path)
    
    # transform and convert to tensor
    img_transformer = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225]
        )
    ])
    
    return img_transformer(img)


# Return the *index* of the predicted class for that image
def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to VGG-16 model's prediction
    '''    
    # Sets the model in evaluation mode.
    VGG16.eval()
    
    # loads image, transforms, and converts to tensor
    img_tensor = img_path_to_tensor(img_path)
    
    # returns a new tensor with a dimension of size one.
    img_ready = img_tensor.unsqueeze(0)
    
    # no gradient calculated
    with torch.no_grad():
        # returns VGG16 models evaluation of image 
        results = VGG16(img_ready)
        # returns predicted class index with the highest (max) probability for correctness
        final_results = results.data.numpy().argmax()      

    return final_results # predicted class index
In [22]:
# test VGG16_predict(img_path)
print(VGG16_predict(dog_files[0]))
print(VGG16_predict(dog_files[1]))
print(VGG16_predict(dog_files[2]))
164
242
166

(IMPLEMENTATION) Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).

Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [23]:
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    '''
    Uses a VGG-16 model's prediction to determine if a dog is detected.
    The reuslts of the model is a int ranging from 0 to 999.
    If the model's results are between 151 and 268 then return True else False.
    
    Args:
        img_path: path to an image
        
    Returns:
        Bool (results are between 151 and 268 then return True else False.)
    '''
    # runs VGG16 predictor that pre-processes the image and evaluates with the model, returning and index
    vgg16_results = VGG16_predict(img_path)
    
    # indexes ranging from 151 to 268 are all dogs from Chihuahua to Mexican hairless 
    if 151 <= vgg16_results <= 268:
        return True
    return False
In [24]:
# test VGG16_predict(img_path)
print(dog_detector(dog_files[0]))
print(dog_detector(human_files[0]))
print(dog_detector(dog_files[1]))
print(dog_detector(human_files[1]))
print(dog_detector(dog_files[2]))
print(dog_detector(human_files[2]))
True
False
True
False
True
False

(IMPLEMENTATION) Assess the Dog Detector

Question 2: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer:

100%|██████████| 100/100 [00:37<00:00,  2.28it/s]
Percent of dogs detected in a sample of the Human Dataset:  0.0% - time: 0:00:37.419369
100%|██████████| 100/100 [00:45<00:00,  2.22it/s]
Percent of dogs detected in a sample of the Dog Dataset:    97.0% - time: 0:00:45.448215
In [25]:
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
dogs_detected_in_human_files = 0
dogs_detected_in_dog_files = 0

startTime = datetime.now()
for image_path in tqdm(human_files_short):
    if dog_detector(image_path):
        dogs_detected_in_human_files += 1
percent_of_dogs_detected_in_human_files = dogs_detected_in_human_files/len(human_files_short)*100
print("Percent of dogs detected in a sample of the Human Dataset:\t{}% - time: {}".format(percent_of_dogs_detected_in_human_files, datetime.now() - startTime))

startTime = datetime.now()
for image_path in tqdm(dog_files_short):
    if dog_detector(image_path):
        dogs_detected_in_dog_files += 1
percent_of_dogs_detected_in_dog_files = dogs_detected_in_dog_files/len(dog_files_short)*100
print("Percent of dogs detected in a sample of the Dog Dataset:\t{}% - time: {}".format(percent_of_dogs_detected_in_dog_files, datetime.now() - startTime))
100%|██████████| 100/100 [00:37<00:00,  2.28it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
Percent of dogs detected in a sample of the Human Dataset:	0.0% - time: 0:00:37.419369
100%|██████████| 100/100 [00:45<00:00,  2.22it/s]
Percent of dogs detected in a sample of the Dog Dataset:	97.0% - time: 0:00:45.448215

We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [26]:
### (Optional) 
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.

# Return the *index* of the predicted class for the given image path and CNN model
def cnn_model_predict(img_path, model=None):
    '''
    Use pre-trained model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        model: cnn model
        
    Returns:
        Index corresponding to model's prediction
    '''

    if not model:
        model = models.vgg16(pretrained=True)

    # sets the model in evaluation mode.
    model.eval()
    
    # loads image, transforms, and converts to tensor
    img_tensor = img_path_to_tensor(img_path)
    
    # returns a new tensor with a dimension of size one.
    img_ready = img_tensor.unsqueeze(0)
    
    # no gradient calculated
    with torch.no_grad():
        # returns VGG16 models evaluation of image 
        results = model(img_ready)
        # returns predicted class index with the highest (max) probability for correctness
        final_results = results.data.numpy().argmax()      

    return final_results # predicted class index


# returns "True" if a dog is detected in the image stored at img_path
def dog_detector_m(img_path, model):
    '''
    Uses a model's prediction to determine if a dog is detected.
    The reuslts of the model is an int ranging from 0 to 999.
    If the model's results are between 151 and 268 then return True else False.
    
    Args:
        img_path: path to an image
        model: cnn model

    Returns:
        Bool (results are between 151 and 268 then return True else False.)
    '''
    
    if not model:
        model = models.vgg16(pretrained=True)
        
    # runs model predictor that pre-processes the image and evaluates with the model, returning and index    
    model_results = cnn_model_predict(img_path, model)
    
    # indexes ranging from 151 to 268 are all dogs from Chihuahua to Mexican hairless,
    if 151 <= model_results <= 268:
        return True
    return False
In [27]:
model_dict = {
    "vgg16": models.vgg16(pretrained=True),
    "vgg16_bt": models.vgg16_bn(pretrained=True),
    "vgg19": models.vgg19(pretrained=True),
    "vgg19_bt": models.vgg19_bn(pretrained=True),
    "resnet18": models.resnet18(pretrained=True),
    "alexnet": models.alexnet(pretrained=True),
    "squeezenet1_0": models.squeezenet1_0(pretrained=True),
    "densenet161": models.densenet161(pretrained=True),
}

for model_name, model in model_dict.items():    
    dogs_detected_in_human_files = 0
    dogs_detected_in_dog_files = 0
    
    startTime = datetime.now()
    for image_path in tqdm(human_files_short):
        if dog_detector_m(image_path, model):
            dogs_detected_in_human_files += 1
    percent_of_dogs_detected_in_human_files = dogs_detected_in_human_files/len(human_files_short)*100
    print("While using a {} model, the percent of dogs classified in a sample of the human dataset:\t{}% - time: {}".format(model_name, percent_of_dogs_detected_in_human_files, datetime.now() - startTime))

    startTime = datetime.now()
    for image_path in tqdm(dog_files_short):
        if dog_detector_m(image_path, model):
            dogs_detected_in_dog_files += 1
    percent_of_dogs_detected_in_dog_files = dogs_detected_in_dog_files/len(dog_files_short)*100
    print("While using a {} model, the percent of dogs classified in a sample of the dog dataset:\t{}% - time: {}".format(model_name, percent_of_dogs_detected_in_dog_files, datetime.now() - startTime))
/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/squeezenet.py:94: UserWarning: nn.init.kaiming_uniform is now deprecated in favor of nn.init.kaiming_uniform_.
/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/squeezenet.py:92: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
100%|██████████| 100/100 [00:35<00:00,  2.84it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg16 model, the percent of dogs classified in a sample of the human dataset:	0.0% - time: 0:00:35.240371
100%|██████████| 100/100 [00:40<00:00,  2.21it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg16 model, the percent of dogs classified in a sample of the dog dataset:	97.0% - time: 0:00:40.838914
100%|██████████| 100/100 [00:38<00:00,  2.66it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg16_bt model, the percent of dogs classified in a sample of the human dataset:	0.0% - time: 0:00:38.912230
100%|██████████| 100/100 [00:39<00:00,  2.34it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg16_bt model, the percent of dogs classified in a sample of the dog dataset:	97.0% - time: 0:00:39.667721
100%|██████████| 100/100 [00:47<00:00,  2.11it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg19 model, the percent of dogs classified in a sample of the human dataset:	0.0% - time: 0:00:47.629621
100%|██████████| 100/100 [00:47<00:00,  2.06it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg19 model, the percent of dogs classified in a sample of the dog dataset:	96.0% - time: 0:00:47.325629
100%|██████████| 100/100 [00:46<00:00,  1.82it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a vgg19_bt model, the percent of dogs classified in a sample of the human dataset:	1.0% - time: 0:00:46.552943
100%|██████████| 100/100 [00:51<00:00,  1.93it/s]
  2%|▏         | 2/100 [00:00<00:07, 13.01it/s]
While using a vgg19_bt model, the percent of dogs classified in a sample of the dog dataset:	96.0% - time: 0:00:51.733938
100%|██████████| 100/100 [00:07<00:00, 13.94it/s]
  2%|▏         | 2/100 [00:00<00:08, 11.75it/s]
While using a resnet18 model, the percent of dogs classified in a sample of the human dataset:	1.0% - time: 0:00:07.015566
100%|██████████| 100/100 [00:08<00:00,  9.00it/s]
  3%|▎         | 3/100 [00:00<00:03, 28.11it/s]
While using a resnet18 model, the percent of dogs classified in a sample of the dog dataset:	98.0% - time: 0:00:08.635723
100%|██████████| 100/100 [00:03<00:00, 28.72it/s]
  2%|▏         | 2/100 [00:00<00:04, 19.98it/s]
While using a alexnet model, the percent of dogs classified in a sample of the human dataset:	1.0% - time: 0:00:03.482922
100%|██████████| 100/100 [00:05<00:00, 12.98it/s]
  2%|▏         | 2/100 [00:00<00:05, 18.61it/s]
While using a alexnet model, the percent of dogs classified in a sample of the dog dataset:	98.0% - time: 0:00:05.054285
100%|██████████| 100/100 [00:05<00:00, 19.29it/s]
  2%|▏         | 2/100 [00:00<00:07, 13.68it/s]
While using a squeezenet1_0 model, the percent of dogs classified in a sample of the human dataset:	5.0% - time: 0:00:05.185130
100%|██████████| 100/100 [00:07<00:00,  9.89it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a squeezenet1_0 model, the percent of dogs classified in a sample of the dog dataset:	97.0% - time: 0:00:07.660726
100%|██████████| 100/100 [01:10<00:00,  1.44it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
While using a densenet161 model, the percent of dogs classified in a sample of the human dataset:	0.0% - time: 0:01:10.794567
100%|██████████| 100/100 [01:12<00:00,  1.37it/s]
While using a densenet161 model, the percent of dogs classified in a sample of the dog dataset:	94.0% - time: 0:01:12.373338


Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imabalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!

In [31]:
import os
from torchvision import datasets

### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes

data_loaders_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'test': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}

batch_size = 20
num_workers = 5
data_dir = 'dogImages/'

img_datasets = {}
loaders_scratch = {}
for data_loader_type in data_loaders_transforms.keys():
    # path to each type of data (train, valid, test)
    dir_path = os.path.join(data_dir, data_loader_type)
    
    # transform data using ImageFolder
    img_datasets[data_loader_type] = datasets.ImageFolder(dir_path, data_loaders_transforms[data_loader_type])

    # prepare data loaders
    loaders_scratch[data_loader_type] = torch.utils.data.DataLoader(img_datasets[data_loader_type], batch_size=batch_size, shuffle=True, num_workers=num_workers)

Question 3: Describe your chosen procedure for preprocessing the data.

  • How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
  • Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?

Answer:

I had almost different methods for all the different datasets (train, valid, test). For training the data, we want to modify and augment the images, so I used RandomResizedCrop, RandomHorizontalFlip, RandomRotation. Image augmentation will create randomness throughout the dataset to prevent overfitting the model with the dataset. RandomResizedCrop also takes care of resizing the image to 224. For validation and testing the data, we don't need to augment the images, but we do need to resize them. I resize to 256 but then center crop the image to a size of 224 like the training images. The image size 224 x 224 is choosen because it is an optimal size for my architecture where I use a max pooling with kernel size of 2, stride size of 2, and padding size of 0.

After augmenting and/or resizing images to a 224 x 224, We turn the image into a tensor and normalize the tenosor using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. All pre-trained models expect the input images to be normalized the same exact way. In this case, mini-batches of with 3-channel RGB images have a shape 3 x H x W, where H and W should to be at least 224. The mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] come from the imagenet dataset.

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. Use the template in the code cell below.

In [32]:
import torch.nn as nn
import torch.nn.functional as F

# CNN architecture
class Net(nn.Module):
    def __init__(self, class_count):
        super(Net, self).__init__()
        
        network_layers = [3, 8, 16, 32, 64, 128]
        kernel_size = 3
        padding = 1

        # convolutional layer (sees 224x224x3 image tensor)
        # size input 224
        self.conv1 = nn.Conv2d(network_layers[0], network_layers[1], kernel_size=kernel_size, padding=padding)
        self.conv2 = nn.Conv2d(network_layers[1], network_layers[2], kernel_size=kernel_size, padding=padding)
        self.conv3 = nn.Conv2d(network_layers[2], network_layers[3], kernel_size=kernel_size, padding=padding)
        self.conv4 = nn.Conv2d(network_layers[3], network_layers[4], kernel_size=kernel_size, padding=padding)
        self.conv5 = nn.Conv2d(network_layers[4], network_layers[5], kernel_size=kernel_size, padding=padding)
        
        # max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        
        # linear layers
        # (128 * 7 * 7 = 6,272 --> 4,096)
        self.fc1 = nn.Linear(128 * 7 * 7, 4096)
        # (4096 --> 1,024)
        self.fc2 = nn.Linear(4096, 1024)
        # (1,024 --> 133)
        self.fc3 = nn.Linear(1024, class_count)
        
        # dropout layer (p=0.5) 50%
        self.dropout = nn.Dropout(p=0.5)
    
    def forward(self, x):
     
        #First we define CNN layers and we use relu activation
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        x = self.pool(F.relu(self.conv5(x)))
        
        #flatten the image for FC layers
        x = x.view(x.size(0), -1)

        #fully connected layers as classifier
        x = self.dropout(x)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x

dog_classes = img_datasets['train'].classes
class_count = len(dog_classes)

# instantiate the CNN
model_scratch = Net(class_count)

# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

Answer:

I wanted to keep this simple since the best method is to just use a pretrained model, which I planned to use the VGG16 again even though I think the resnet18 and alexnet might preform better. I don't have the compute resources to testthis theory of might so I'll stick to what I know.

As for the convolutional neural network built from scratch, I started my choosing my networks convolutional input layer sizes starting with 3 since we have an image with 3 color channels. With my image transformation and image resize of 224 x 224, this makes my CNN's input 224 x 224 x 3 (a depth of 3).

My first real choice is to make a Conv2D layer, which are normally used for images, with that input channel of 3, output channel of 8, a kernel size of 3, padding of 1, and default stride of 1. This will keep my output spacial dimesions the same while increasing my output depth since it's calculated using:

(W - F + 2P)/S + 1

I take my first Conv2d and pass it directly into a relu activation function to alleviate the vanishing gradient problem. This is an issue where layers of the network can begin to train slower and slower since the gradient will decrease exponentially through the layers. The relu function will take any negative number and replace it with a 0, increasing speeds.

Next I pass the relu output to a maxpooling layer with a size of 2x2 and stride of 2. After we identify specific features from the convolutional layer, I use a maxpooling layer to extract relevent feature locations while significantly reducing spatial dimentions, in this case by a factor of 2, thus reducing number of computations for the next conv2d layer.

I continue to use the same conv2d, relu, and maxpooling functions, only changing the output channel as I pass through this convolutional layer pipeline. The following are the input channels and spatial dimentions with corresponding output channels and spatial dimentions for each convolutional layer pipeline iteration. I stop with 5 layers to keep a simple and clean spatial dimentions of 7 X 7.

pipeline iteration input spatial dimentions and channels output spatial dimentions and channels
1 224 x 224 x 3 112 x 112 x 8
2 112 x 112 x 8 56 x 56 x 16
3 56 x 56 x 16 28 x 28 x 32
4 28 x 28 x 32 14 x 14 x 64
5 14 x 14 x 64 7 x 7 x 128

After the convolutional layer pipeline outputted the image with a spatial dimention and channel of 7 x 7 x 128, I flattened it into a vector. I did this to pass the image into a new pipeline of fully connected linear layers, dropouts and more relu functions. The new pipeline starts with a dropout functions with a probability of 50%, reducing overfitting. Then, passing the image to a Linear function to reduce the flattened image's feature count. Finally, using a relu activation fuction to again relieve the vanishing gradient problem but replacing negatives with zeros to increse speed and reduce overfitting.

Linear ƒ iteration input feature count output feature count
1 7 x 7 x 128 = 6272 4096
2 4096 1024
3 1024 133

Lastly, A relu function is not used because we want the probability that the feature class is found and this is equal to a total of 1.0. If I place negatives with zeros that doen't do anything for me but cripple the features probablity. Plus its would be a waste of time. I just need the largest probability of all the feature classes to output to the user.

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.

In [33]:
import torch.optim as optim
learning_rate = 0.01

### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()

### TODO: select optimizer
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=learning_rate)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.

In [36]:
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf 
    
    for epoch in range(1, n_epochs+1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0
        
        ###################
        # train the model #
        ###################
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move tensors to GPU if CUDA is available
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            
            # clear the gradients of all optimized variables
            optimizer.zero_grad()
            
            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(data)
            
            # calculate the batch loss
            loss = criterion(output, target)
            
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            
            # perform a single optimization step (parameter update)
            optimizer.step()
            
            # record the average training loss, using something like
            train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))

        ######################    
        # validate the model #
        ######################
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move tensors to GPU if CUDA is available
            if use_cuda:
                data, target = data.cuda(), target.cuda()

            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(data)
            
            # calculate the batch loss
            loss = criterion(output, target)
            
            # record the average validation loss, using something like
            valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))

        # print training/validation statistics 
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, 
            train_loss,
            valid_loss
            ))
        
        # save the model if validation loss has decreased
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}).  Saving model ...'.format(
            valid_loss_min,
            valid_loss))
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss

    # return trained model
    return model

# train the model
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch, 
                      criterion_scratch, use_cuda, 'model_scratch.pt')

# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Epoch: 1 	Training Loss: 4.889633 	Validation Loss: 4.888414
Validation loss decreased (inf --> 4.888414).  Saving model ...
Epoch: 2 	Training Loss: 4.887853 	Validation Loss: 4.886799
Validation loss decreased (4.888414 --> 4.886799).  Saving model ...
Epoch: 3 	Training Loss: 4.885952 	Validation Loss: 4.884846
Validation loss decreased (4.886799 --> 4.884846).  Saving model ...
Epoch: 4 	Training Loss: 4.883877 	Validation Loss: 4.883866
Validation loss decreased (4.884846 --> 4.883866).  Saving model ...
Epoch: 5 	Training Loss: 4.881832 	Validation Loss: 4.881224
Validation loss decreased (4.883866 --> 4.881224).  Saving model ...
Epoch: 6 	Training Loss: 4.878717 	Validation Loss: 4.876428
Validation loss decreased (4.881224 --> 4.876428).  Saving model ...
Epoch: 7 	Training Loss: 4.874948 	Validation Loss: 4.875340
Validation loss decreased (4.876428 --> 4.875340).  Saving model ...
Epoch: 8 	Training Loss: 4.871449 	Validation Loss: 4.870945
Validation loss decreased (4.875340 --> 4.870945).  Saving model ...
Epoch: 9 	Training Loss: 4.867643 	Validation Loss: 4.871206
Epoch: 10 	Training Loss: 4.867172 	Validation Loss: 4.867533
Validation loss decreased (4.870945 --> 4.867533).  Saving model ...
Epoch: 11 	Training Loss: 4.865650 	Validation Loss: 4.867508
Validation loss decreased (4.867533 --> 4.867508).  Saving model ...
Epoch: 12 	Training Loss: 4.865297 	Validation Loss: 4.862556
Validation loss decreased (4.867508 --> 4.862556).  Saving model ...
Epoch: 13 	Training Loss: 4.863268 	Validation Loss: 4.860785
Validation loss decreased (4.862556 --> 4.860785).  Saving model ...
Epoch: 14 	Training Loss: 4.860407 	Validation Loss: 4.859731
Validation loss decreased (4.860785 --> 4.859731).  Saving model ...
Epoch: 15 	Training Loss: 4.858593 	Validation Loss: 4.859847
Epoch: 16 	Training Loss: 4.855083 	Validation Loss: 4.846365
Validation loss decreased (4.859731 --> 4.846365).  Saving model ...
Epoch: 17 	Training Loss: 4.847087 	Validation Loss: 4.838223
Validation loss decreased (4.846365 --> 4.838223).  Saving model ...
Epoch: 18 	Training Loss: 4.836901 	Validation Loss: 4.814316
Validation loss decreased (4.838223 --> 4.814316).  Saving model ...
Epoch: 19 	Training Loss: 4.817021 	Validation Loss: 4.779703
Validation loss decreased (4.814316 --> 4.779703).  Saving model ...
Epoch: 20 	Training Loss: 4.785621 	Validation Loss: 4.741714
Validation loss decreased (4.779703 --> 4.741714).  Saving model ...
Epoch: 21 	Training Loss: 4.761654 	Validation Loss: 4.704464
Validation loss decreased (4.741714 --> 4.704464).  Saving model ...
Epoch: 22 	Training Loss: 4.732613 	Validation Loss: 4.692503
Validation loss decreased (4.704464 --> 4.692503).  Saving model ...
Epoch: 23 	Training Loss: 4.729320 	Validation Loss: 4.659220
Validation loss decreased (4.692503 --> 4.659220).  Saving model ...
Epoch: 24 	Training Loss: 4.713345 	Validation Loss: 4.639388
Validation loss decreased (4.659220 --> 4.639388).  Saving model ...
Epoch: 25 	Training Loss: 4.688850 	Validation Loss: 4.607676
Validation loss decreased (4.639388 --> 4.607676).  Saving model ...
Epoch: 26 	Training Loss: 4.683287 	Validation Loss: 4.616062
Epoch: 27 	Training Loss: 4.678403 	Validation Loss: 4.637399
Epoch: 28 	Training Loss: 4.655400 	Validation Loss: 4.592434
Validation loss decreased (4.607676 --> 4.592434).  Saving model ...
Epoch: 29 	Training Loss: 4.644466 	Validation Loss: 4.578986
Validation loss decreased (4.592434 --> 4.578986).  Saving model ...
Epoch: 30 	Training Loss: 4.626159 	Validation Loss: 4.551421
Validation loss decreased (4.578986 --> 4.551421).  Saving model ...
Epoch: 31 	Training Loss: 4.609717 	Validation Loss: 4.536491
Validation loss decreased (4.551421 --> 4.536491).  Saving model ...
Epoch: 32 	Training Loss: 4.586445 	Validation Loss: 4.449104
Validation loss decreased (4.536491 --> 4.449104).  Saving model ...
Epoch: 33 	Training Loss: 4.552904 	Validation Loss: 4.430554
Validation loss decreased (4.449104 --> 4.430554).  Saving model ...
Epoch: 34 	Training Loss: 4.526196 	Validation Loss: 4.397543
Validation loss decreased (4.430554 --> 4.397543).  Saving model ...
Epoch: 35 	Training Loss: 4.504586 	Validation Loss: 4.372711
Validation loss decreased (4.397543 --> 4.372711).  Saving model ...
Epoch: 36 	Training Loss: 4.495601 	Validation Loss: 4.392441
Epoch: 37 	Training Loss: 4.479960 	Validation Loss: 4.316556
Validation loss decreased (4.372711 --> 4.316556).  Saving model ...
Epoch: 38 	Training Loss: 4.468126 	Validation Loss: 4.329301
Epoch: 39 	Training Loss: 4.442003 	Validation Loss: 4.308263
Validation loss decreased (4.316556 --> 4.308263).  Saving model ...
Epoch: 40 	Training Loss: 4.438858 	Validation Loss: 4.299715
Validation loss decreased (4.308263 --> 4.299715).  Saving model ...
Epoch: 41 	Training Loss: 4.421660 	Validation Loss: 4.280490
Validation loss decreased (4.299715 --> 4.280490).  Saving model ...
Epoch: 42 	Training Loss: 4.404879 	Validation Loss: 4.268818
Validation loss decreased (4.280490 --> 4.268818).  Saving model ...
Epoch: 43 	Training Loss: 4.382197 	Validation Loss: 4.220548
Validation loss decreased (4.268818 --> 4.220548).  Saving model ...
Epoch: 44 	Training Loss: 4.353415 	Validation Loss: 4.200757
Validation loss decreased (4.220548 --> 4.200757).  Saving model ...
Epoch: 45 	Training Loss: 4.345822 	Validation Loss: 4.219476
Epoch: 46 	Training Loss: 4.343834 	Validation Loss: 4.163628
Validation loss decreased (4.200757 --> 4.163628).  Saving model ...
Epoch: 47 	Training Loss: 4.319974 	Validation Loss: 4.166594
Epoch: 48 	Training Loss: 4.311376 	Validation Loss: 4.173990
Epoch: 49 	Training Loss: 4.293885 	Validation Loss: 4.128272
Validation loss decreased (4.163628 --> 4.128272).  Saving model ...
Epoch: 50 	Training Loss: 4.278637 	Validation Loss: 4.090672
Validation loss decreased (4.128272 --> 4.090672).  Saving model ...
Epoch: 51 	Training Loss: 4.262142 	Validation Loss: 4.079067
Validation loss decreased (4.090672 --> 4.079067).  Saving model ...
Epoch: 52 	Training Loss: 4.249563 	Validation Loss: 4.067878
Validation loss decreased (4.079067 --> 4.067878).  Saving model ...
Epoch: 53 	Training Loss: 4.224242 	Validation Loss: 4.199427
Epoch: 54 	Training Loss: 4.216682 	Validation Loss: 4.057527
Validation loss decreased (4.067878 --> 4.057527).  Saving model ...
Epoch: 55 	Training Loss: 4.187021 	Validation Loss: 4.000546
Validation loss decreased (4.057527 --> 4.000546).  Saving model ...
Epoch: 56 	Training Loss: 4.180863 	Validation Loss: 4.038802
Epoch: 57 	Training Loss: 4.143993 	Validation Loss: 3.998282
Validation loss decreased (4.000546 --> 3.998282).  Saving model ...
Epoch: 58 	Training Loss: 4.149378 	Validation Loss: 4.050473
Epoch: 59 	Training Loss: 4.112382 	Validation Loss: 3.922210
Validation loss decreased (3.998282 --> 3.922210).  Saving model ...
Epoch: 60 	Training Loss: 4.109028 	Validation Loss: 3.968907
Epoch: 61 	Training Loss: 4.103728 	Validation Loss: 3.925787
Epoch: 62 	Training Loss: 4.071875 	Validation Loss: 3.935226
Epoch: 63 	Training Loss: 4.070127 	Validation Loss: 3.928869
Epoch: 64 	Training Loss: 4.039280 	Validation Loss: 4.024936
Epoch: 65 	Training Loss: 4.015461 	Validation Loss: 3.920143
Validation loss decreased (3.922210 --> 3.920143).  Saving model ...
Epoch: 66 	Training Loss: 4.012515 	Validation Loss: 3.847713
Validation loss decreased (3.920143 --> 3.847713).  Saving model ...
Epoch: 67 	Training Loss: 4.008005 	Validation Loss: 3.855533
Epoch: 68 	Training Loss: 3.983671 	Validation Loss: 3.873065
Epoch: 69 	Training Loss: 3.972114 	Validation Loss: 3.865235
Epoch: 70 	Training Loss: 3.950836 	Validation Loss: 3.892903
Epoch: 71 	Training Loss: 3.935540 	Validation Loss: 3.796454
Validation loss decreased (3.847713 --> 3.796454).  Saving model ...
Epoch: 72 	Training Loss: 3.924342 	Validation Loss: 3.754345
Validation loss decreased (3.796454 --> 3.754345).  Saving model ...
Epoch: 73 	Training Loss: 3.913437 	Validation Loss: 3.784689
Epoch: 74 	Training Loss: 3.880569 	Validation Loss: 3.807155
Epoch: 75 	Training Loss: 3.854890 	Validation Loss: 3.831982
Epoch: 76 	Training Loss: 3.850501 	Validation Loss: 3.735996
Validation loss decreased (3.754345 --> 3.735996).  Saving model ...
Epoch: 77 	Training Loss: 3.840309 	Validation Loss: 3.789283
Epoch: 78 	Training Loss: 3.843007 	Validation Loss: 3.772360
Epoch: 79 	Training Loss: 3.806714 	Validation Loss: 3.686563
Validation loss decreased (3.735996 --> 3.686563).  Saving model ...
Epoch: 80 	Training Loss: 3.774921 	Validation Loss: 3.701259
Epoch: 81 	Training Loss: 3.782540 	Validation Loss: 3.688754
Epoch: 82 	Training Loss: 3.772502 	Validation Loss: 3.691770
Epoch: 83 	Training Loss: 3.730255 	Validation Loss: 3.709374
Epoch: 84 	Training Loss: 3.733787 	Validation Loss: 3.668675
Validation loss decreased (3.686563 --> 3.668675).  Saving model ...
Epoch: 85 	Training Loss: 3.719435 	Validation Loss: 3.819196
Epoch: 86 	Training Loss: 3.711062 	Validation Loss: 3.689024
Epoch: 87 	Training Loss: 3.703333 	Validation Loss: 3.630904
Validation loss decreased (3.668675 --> 3.630904).  Saving model ...
Epoch: 88 	Training Loss: 3.701375 	Validation Loss: 3.622853
Validation loss decreased (3.630904 --> 3.622853).  Saving model ...
Epoch: 89 	Training Loss: 3.675306 	Validation Loss: 3.592590
Validation loss decreased (3.622853 --> 3.592590).  Saving model ...
Epoch: 90 	Training Loss: 3.655603 	Validation Loss: 3.644053
Epoch: 91 	Training Loss: 3.612857 	Validation Loss: 3.576004
Validation loss decreased (3.592590 --> 3.576004).  Saving model ...
Epoch: 92 	Training Loss: 3.613791 	Validation Loss: 3.558521
Validation loss decreased (3.576004 --> 3.558521).  Saving model ...
Epoch: 93 	Training Loss: 3.600635 	Validation Loss: 3.633176
Epoch: 94 	Training Loss: 3.582585 	Validation Loss: 3.556488
Validation loss decreased (3.558521 --> 3.556488).  Saving model ...
Epoch: 95 	Training Loss: 3.596127 	Validation Loss: 3.543610
Validation loss decreased (3.556488 --> 3.543610).  Saving model ...
Epoch: 96 	Training Loss: 3.560014 	Validation Loss: 3.563526
Epoch: 97 	Training Loss: 3.562244 	Validation Loss: 3.446909
Validation loss decreased (3.543610 --> 3.446909).  Saving model ...
Epoch: 98 	Training Loss: 3.547410 	Validation Loss: 3.490575
Epoch: 99 	Training Loss: 3.528816 	Validation Loss: 3.675814
Epoch: 100 	Training Loss: 3.510213 	Validation Loss: 3.491242

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.

In [37]:
def test(loaders, model, criterion, use_cuda):

    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss 
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)
            
    print('Test Loss: {:.6f}\n'.format(test_loss))

    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))

# call test function    
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
Test Loss: 3.507678


Test Accuracy: 15% (128/836)

Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)

You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).

If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.

In [41]:
## TODO: Specify data loaders


data_loaders_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'test': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}

batch_size = 20
num_workers = 5
data_dir = 'dogImages/'

img_datasets = {}
loaders_transfer = {}
for data_loader_type in data_loaders_transforms.keys():
    # path to each type of data (train, valid, test)
    dir_path = os.path.join(data_dir, data_loader_type)
    
    # transform data using ImageFolder
    img_datasets[data_loader_type] = datasets.ImageFolder(dir_path, data_loaders_transforms[data_loader_type])

    # prepare data loaders
    loaders_transfer[data_loader_type] = torch.utils.data.DataLoader(img_datasets[data_loader_type], batch_size=batch_size, shuffle=True, num_workers=num_workers)

(IMPLEMENTATION) Model Architecture

Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.

In [39]:
import torchvision.models as models
import torch.nn as nn

## TODO: Specify model architecture 
model_transfer = models.vgg16(pretrained=True)
print(model_transfer.classifier[6].in_features) 
print(model_transfer.classifier[6].out_features) 

# freeze training for all "features" layers
for param in model_transfer.features.parameters():
    param.require_grad = False
        
n_inputs = model_transfer.classifier[6].in_features

# add last linear layer (n_inputs -> 133 god breed classes)
# new layers automatically have requires_grad = True
last_layer = nn.Linear(n_inputs, 133) # len(classes))
model_transfer.classifier[6] = last_layer

# check to see that your last layer produces the expected number of outputs
print(model_transfer.classifier[6].out_features)
print(model_transfer)

# if GPU is available, move the model to GPU
if use_cuda:
    model_transfer = model_transfer.cuda()
4096
1000
133
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace)
    (5): Dropout(p=0.5)
    (6): Linear(in_features=4096, out_features=133, bias=True)
  )
)

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answer:

I am using a pretrained VGG16 again because it is simple and affective. The VGG-16 model is robust with noise, has low training and validation loss with minimal training time, and high testing accuracy. I made small modifications to the default architecture for our problem of dog breed classification. Really all I did was change the final classifier layer to output 133 features ,instead of 1000 features, for the number of dog breeds we were predicting from.

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.

In [42]:
# loss function (categorical cross-entropy)
criterion_transfer = nn.CrossEntropyLoss()

# optimizer (stochastic gradient descent) and learning rate = 0.001
optimizer_transfer = optim.SGD(model_transfer.classifier.parameters(), lr=0.001)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.

In [43]:
# train the model
n_epochs = 20

model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
Epoch: 1 	Training Loss: 4.328254 	Validation Loss: 3.186509
Validation loss decreased (inf --> 3.186509).  Saving model ...
Epoch: 2 	Training Loss: 2.839885 	Validation Loss: 1.643686
Validation loss decreased (3.186509 --> 1.643686).  Saving model ...
Epoch: 3 	Training Loss: 2.000135 	Validation Loss: 1.116457
Validation loss decreased (1.643686 --> 1.116457).  Saving model ...
Epoch: 4 	Training Loss: 1.631395 	Validation Loss: 0.889863
Validation loss decreased (1.116457 --> 0.889863).  Saving model ...
Epoch: 5 	Training Loss: 1.474383 	Validation Loss: 0.772559
Validation loss decreased (0.889863 --> 0.772559).  Saving model ...
Epoch: 6 	Training Loss: 1.360589 	Validation Loss: 0.745051
Validation loss decreased (0.772559 --> 0.745051).  Saving model ...
Epoch: 7 	Training Loss: 1.275256 	Validation Loss: 0.713657
Validation loss decreased (0.745051 --> 0.713657).  Saving model ...
Epoch: 8 	Training Loss: 1.236678 	Validation Loss: 0.641537
Validation loss decreased (0.713657 --> 0.641537).  Saving model ...
Epoch: 9 	Training Loss: 1.193917 	Validation Loss: 0.660886
Epoch: 10 	Training Loss: 1.173808 	Validation Loss: 0.632406
Validation loss decreased (0.641537 --> 0.632406).  Saving model ...
Epoch: 11 	Training Loss: 1.127233 	Validation Loss: 0.600736
Validation loss decreased (0.632406 --> 0.600736).  Saving model ...
Epoch: 12 	Training Loss: 1.101205 	Validation Loss: 0.612260
Epoch: 13 	Training Loss: 1.071264 	Validation Loss: 0.594063
Validation loss decreased (0.600736 --> 0.594063).  Saving model ...
Epoch: 14 	Training Loss: 1.073403 	Validation Loss: 0.568002
Validation loss decreased (0.594063 --> 0.568002).  Saving model ...
Epoch: 15 	Training Loss: 1.018510 	Validation Loss: 0.566380
Validation loss decreased (0.568002 --> 0.566380).  Saving model ...
Epoch: 16 	Training Loss: 1.018137 	Validation Loss: 0.538149
Validation loss decreased (0.566380 --> 0.538149).  Saving model ...
Epoch: 17 	Training Loss: 1.013126 	Validation Loss: 0.588115
Process Process-1184:
Process Process-1182:
Process Process-1183:
Process Process-1185:
Process Process-1181:
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
Traceback (most recent call last):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
    r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
    r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
    r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
    r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 96, in _worker_loop
    r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
  File "/anaconda3/envs/deep-learning/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt
KeyboardInterrupt
KeyboardInterrupt
  File "/anaconda3/envs/deep-learning/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt
KeyboardInterrupt
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x2013565c0>>
Traceback (most recent call last):
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 397, in __del__
    def __del__(self):
  File "/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 227, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 51890) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-43-12cf18a614fa> in <module>()
      2 n_epochs = 20
      3 
----> 4 model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
      5 
      6 # load the model that got the best validation accuracy (uncomment the line below)

<ipython-input-36-0131cfad1eb2> in train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path)
     21 
     22             # forward pass: compute predicted outputs by passing inputs to the model
---> 23             output = model(data)
     24 
     25             # calculate the batch loss

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/vgg.py in forward(self, x)
     40 
     41     def forward(self, x):
---> 42         x = self.features(x)
     43         x = x.view(x.size(0), -1)
     44         x = self.classifier(x)

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
     89     def forward(self, input):
     90         for module in self._modules.values():
---> 91             input = module(input)
     92         return input
     93 

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

/anaconda3/envs/deep-learning/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

KeyboardInterrupt: 
In [44]:
# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.

In [45]:
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Test Loss: 0.640439


Test Accuracy: 80% (669/836)

(IMPLEMENTATION) Predict Dog Breed with the Model

Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.

In [49]:
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.

# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in img_datasets['train'].classes]


# takes a path to an image as input and returns the dog breed that is predicted by the model.
def predict_breed_transfer(img_path):
    # load the image and return the predicted breed
    img = Image.open(img_path)
    plt.imshow(img)
    plt.show() 

    model_transfer.eval()
    img_tensor = img_path_to_tensor(img_path)
    img_ready = img_tensor.unsqueeze(0)        
    with torch.no_grad():
        results = model_transfer(img_ready)
        results = results.data.numpy().argmax()
    return class_names[results]

for img_p in dog_files[:15]:
    print(img_p, "\n", predict_breed_transfer(img_p), "\n\n")
dogImages/valid/122.Pointer/Pointer_07831.jpg 
 Pointer 


dogImages/valid/122.Pointer/Pointer_07826.jpg 
 Pointer 


dogImages/valid/122.Pointer/Pointer_07834.jpg 
 American foxhound 


dogImages/valid/122.Pointer/Pointer_07808.jpg 
 Pointer 


dogImages/valid/069.French_bulldog/French_bulldog_04807.jpg 
 French bulldog 


dogImages/valid/069.French_bulldog/French_bulldog_04792.jpg 
 French bulldog 


dogImages/valid/069.French_bulldog/French_bulldog_04784.jpg 
 French bulldog 


dogImages/valid/069.French_bulldog/French_bulldog_04775.jpg 
 French bulldog 


dogImages/valid/069.French_bulldog/French_bulldog_04770.jpg 
 French bulldog 


dogImages/valid/069.French_bulldog/French_bulldog_04764.jpg 
 French bulldog 


dogImages/valid/124.Poodle/Poodle_07946.jpg 
 Poodle 


dogImages/valid/124.Poodle/Poodle_07948.jpg 
 Poodle 


dogImages/valid/124.Poodle/Poodle_07913.jpg 
 Poodle 


dogImages/valid/124.Poodle/Poodle_07905.jpg 
 Poodle 


dogImages/valid/124.Poodle/Poodle_07911.jpg 
 Poodle 



Step 5: Write your Algorithm

Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and human_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.

Some sample output for our algorithm is provided below, but feel free to design your own user experience!

Sample Human Output

(IMPLEMENTATION) Write your Algorithm

In [50]:
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.

def run_app(img_path):
    # handle cases for a human face, dog, and neither
    img = Image.open(img_path)
    plt.imshow(img)
    
    if dog_detector(img_path):
        print("Hello, Dog.\nYour predicted breed is ...\n{}.".format(predict_breed_transfer(img_path)))
    elif face_detector_fr(img_path):
        print("Hello, human.\nYou look like a ...\n{}".format(predict_breed_transfer(img_path))) 
    else:
        print("Error, There isn't a dog nor a human.")
    plt.show()

Step 6: Test Your Algorithm

In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

(IMPLEMENTATION) Test Your Algorithm on Sample Images!

Test your algorithm at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answer: (Three possible points for improvement)

My example dataset has 12 images. These images contain a single subject centering a human, a dog, a cartoon dog, an animals other than a dog, or a landscape. The results show a perfect output, exactly what I expected. I know that that there is still room for improvement. With the next iteration of this project, I will train the model with a larger dataset and more epoches. Another improvement I will like to make is having the model classify multiple dogs in an image and classify a dog and human seperately. A fun improvement would be to introduce popular cartoon/anamated dog characters to the training set.

In [55]:
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
example_files = np.array(glob("example_dataset/*"))
## suggested code, below
for file in np.hstack(example_files):
    run_app(file)
Hello, Dog.
Your predicted breed is ...
Dalmatian.
Hello, human.
You look like a ...
Greyhound
Hello, Dog.
Your predicted breed is ...
Collie.
Hello, human.
You look like a ...
Dachshund
Error, There isn't a dog nor a human.
Hello, Dog.
Your predicted breed is ...
Bulldog.
Error, There isn't a dog nor a human.
Error, There isn't a dog nor a human.
Error, There isn't a dog nor a human.
Error, There isn't a dog nor a human.
Hello, Dog.
Your predicted breed is ...
Portuguese water dog.
Hello, human.
You look like a ...
German wirehaired pointer